Coulomb Branch Operators and Mirror Symmetry in Three Dimensions
We develop new techniques for computing exact correlation functions of a
class of local operators, including certain monopole operators, in
three-dimensional N=4 abelian gauge theories that have
superconformal infrared limits. These operators are position-dependent linear
combinations of Coulomb branch operators. They form a one-dimensional
topological sector that encodes a deformation quantization of the Coulomb
branch chiral ring, and their correlation functions completely fix the n-point functions of all half-BPS Coulomb branch operators. Using these
results, we provide new derivations of the conformal dimension of half-BPS
monopole operators as well as new and detailed tests of mirror symmetry. Our
main approach involves supersymmetric localization on a hemisphere with
half-BPS boundary conditions, where operator insertions within the hemisphere
are represented by certain shift operators acting on the wavefunction.
By gluing a pair of such wavefunctions, we obtain correlators on S^3 with an
arbitrary number of operator insertions. Finally, we show that our results can
be recovered by dimensionally reducing the Schur index of 4D N=2
theories decorated by BPS 't Hooft-Wilson loops.
Comment: 92 pages plus appendices, two figures; v2 and v3: typos corrected, references added
Self-Paced Learning: an Implicit Regularization Perspective
Self-paced learning (SPL) mimics the cognitive mechanism of humans and
animals that gradually learn from easy to hard samples. One key issue in SPL
is to obtain a better weighting strategy, which is determined by the minimizer
function. Existing methods usually pursue this by artificially designing the
explicit form of the SPL regularizer. In this paper, we focus on the minimizer
function and study a group of new regularizers, named self-paced implicit
regularizers, which are deduced from robust loss functions. Based on convex
conjugacy theory, the minimizer function for a self-paced implicit regularizer
can be directly learned from the latent loss function, even when the analytic
form of the regularizer is unknown. A general framework for SPL, named SPL-IR,
is developed accordingly. We demonstrate that the learning procedure of
SPL-IR is associated with latent robust loss functions and thus can provide
some theoretical insight into its working mechanism. We further analyze the
relation between SPL-IR and half-quadratic optimization. Finally, we apply
SPL-IR to both supervised and unsupervised tasks, and experimental results
corroborate our ideas and demonstrate the correctness and effectiveness of
implicit regularizers.
Comment: 12 pages, 3 figures
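The core SPL idea above can be illustrated with a minimal sketch: alternate between fitting a model on the currently weighted samples and re-weighting each sample by a minimizer function derived from a robust loss, while relaxing the pace parameter so harder samples are admitted over time. This is not the paper's SPL-IR implementation; the function names, the Welsch-style weighting `exp(-loss/lam)` (one classic half-quadratic weighting), and all parameter choices are illustrative assumptions.

```python
import numpy as np

def implicit_weights(losses, lam):
    # Weighting induced by a Welsch-style robust loss via half-quadratic
    # optimization: w_i = exp(-l_i / lam). Easy samples (small loss) get
    # weight near 1; hard samples are down-weighted. (Illustrative choice,
    # standing in for the family of implicit regularizers in the paper.)
    return np.exp(-np.asarray(losses) / lam)

def spl_ir_regression(X, y, lam0=1.0, growth=1.5, iters=5):
    """Toy self-paced weighted least squares (hypothetical helper):
    alternate between a fit under the current sample weights and a
    weight update from per-sample losses, growing lam each round so
    harder samples gradually enter the training set."""
    n, _ = X.shape
    w = np.ones(n)          # start with uniform weights
    lam = lam0              # initial "pace" parameter
    for _ in range(iters):
        # Weighted least-squares fit: scale rows by sqrt of the weights.
        s = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(s[:, None] * X, s * y, rcond=None)
        losses = (X @ theta - y) ** 2
        w = implicit_weights(losses, lam)
        lam *= growth       # relax the pace: admit harder samples
    return theta, w
```

On data with a single gross outlier, the outlier's weight collapses toward zero while the easy samples keep weights near 1, so the fit is driven by the clean samples, which is the self-paced, robust behavior the abstract describes.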